Search Results: "jerome"

3 August 2015

Lunar: Reproducible builds: week 14 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes: akira submitted a patch to make cdbs export SOURCE_DATE_EPOCH. She uploaded a package with the enhancement to the experimental reproducible repository.

Packages fixed: The following 16 packages became reproducible due to changes in their build dependencies: dracut, editorconfig-core, elasticsearch, fish, libftdi1, liblouisxml, mk-configure, nanoc, octave-bim, octave-data-smoothing, octave-financial, octave-ga, octave-missing-functions, octave-secs1d, octave-splines, valgrind.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them: In contrib, Dmitry Smirnov improved libdvd-pkg with 1.3.99-1-1.

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net: Four armhf build hosts were provided by Vagrant Cascadian and have been configured to be used by jenkins.debian.net. Work on including armhf builds in the reproducible.debian.net webpages has begun. So far the repository comparison page just shows us which armhf binary packages are currently missing in our repo. (h01ger) The scheduler has been changed to re-schedule more packages from stretch than sid, as the gcc5 transition has started. This mostly affects build log age. (h01ger) A new depwait status has been introduced for packages which can't be built because of missing build dependencies. (Mattia Rizzolo)

debbindiff development: Finally, on July 31st, Lunar released debbindiff 27 containing a complete overhaul of the code for the comparison stage. The new architecture is more versatile and extensible while minimizing code duplication. libarchive is now used to handle cpio archives and iso9660 images through the newly packaged python-libarchive-c. This should also help support a couple of other archive formats in the future. Symlinks and devices are now properly compared. Text files are compared as Unicode after being decoded, and encoding differences are reported. Support for Sqlite3 and Mono/.NET executables has been added. Thanks to Valentin Lorentz, the test suite should now run on more systems. A small deficiency in unsquashfs has been identified in the process. A long-standing optimization is now performed on Debian packages: based on the content of the md5sums control file, we skip comparing files with matching hashes. This makes debbindiff usable on packages with many files. Fuzzy-matching is now performed for files in the same container (like a tarball) to handle renames. Also, for Debian .changes, listed files are now compared without looking at the embedded version number. This makes debbindiff a lot more useful when comparing different versions of the same package. Based on this rearchitecturing, work has been done to allow parallel processing. The branch now seems to work most of the time. More testing needs to be done before it can be merged. The current fuzzy-matching algorithm, ssdeep, has shown disappointing results. One important use case is being able to properly compare debug symbols. Their path is made using the Build ID. As this identifier is made with a checksum of the binary content, finding things like CPP macros is much easier when a diff of the debug symbols is available. The good news is that TLSH, another fuzzy-matching algorithm, has been tested with much better results. A package is waiting in NEW and the code is ready for it to become available.
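The md5sums shortcut and the rename matching are easy to picture. Here is a minimal Python sketch of both ideas (an illustration only, not debbindiff's actual code; it assumes Python bindings for TLSH such as py-tlsh, which cannot hash very small inputs):

import hashlib
from pathlib import Path

import tlsh  # py-tlsh bindings

def md5(path):
    return hashlib.md5(path.read_bytes()).hexdigest()

def files_to_compare(tree1, tree2):
    # The md5sums shortcut: skip files whose content hashes already match.
    for p1 in sorted(tree1.rglob("*")):
        if not p1.is_file():
            continue
        p2 = tree2 / p1.relative_to(tree1)
        if p2.is_file() and md5(p1) == md5(p2):
            continue  # identical content, nothing to diff
        yield p1, p2

def closest_match(path, candidates, cutoff=100):
    # Rename handling: pair a file with its most similar counterpart.
    # tlsh.diff() returns a distance, 0 meaning identical; the cutoff is
    # an arbitrary threshold for this sketch. Very small inputs produce
    # no TLSH hash at all, in which case we simply give up.
    h = tlsh.hash(path.read_bytes())
    if h in ("", "TNULL"):
        return None
    best_score, best = None, None
    for cand in candidates:
        ch = tlsh.hash(cand.read_bytes())
        if ch in ("", "TNULL"):
            continue
        score = tlsh.diff(h, ch)
        if best_score is None or score < best_score:
            best_score, best = score, cand
    return best if best_score is not None and best_score <= cutoff else None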
A follow-up release 28 was made on August 2nd fixing the content label used for gzip, bzip2 and xz files and an error on text files only differing in their encoding. It also contains a small code improvement on how comments on Difference objects are handled. This is the last release named debbindiff. A new name has been chosen to better reflect that it is not a Debian-specific tool. Stay tuned!

Documentation update: Valentin Lorentz updated the patch submission template to suggest writing the kind of issue in the bug subject. Small progress has been made on the Reproducible Builds HOWTO while preparing the related CCCamp15 talk.

Package reviews: 235 obsolete reviews have been removed, 47 added and 113 updated this week. 42 reports for packages failing to build from source have been made by Chris West (Faux). New issue added this week: haskell_devscripts_locale_substvars.

Misc.: Valentin Lorentz wrote a script to report packages tested as unreproducible installed on a system. We encourage everyone to run it on their systems and give feedback!
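A script like Valentin's is simple to sketch. The following Python illustration captures the idea; the status URL and the record fields are assumptions made up for the example, not his script's actual interface:

#!/usr/bin/env python3
# Sketch: list installed packages that the reproducible builds CI flagged
# as unreproducible. The URL and JSON shape below are hypothetical.
import json
import subprocess
import urllib.request

STATUS_URL = "https://reproducible.debian.net/reproducible.json"  # hypothetical

def installed_packages():
    out = subprocess.check_output(
        ["dpkg-query", "-W", "-f", "${Package}\n"], text=True)
    return set(out.split())

def unreproducible_packages():
    with urllib.request.urlopen(STATUS_URL) as resp:
        records = json.load(resp)
    # assumed shape: [{"package": "...", "status": "unreproducible"}, ...]
    return {r["package"] for r in records if r["status"] == "unreproducible"}

if __name__ == "__main__":
    for pkg in sorted(installed_packages() & unreproducible_packages()):
        print(pkg)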

20 June 2015

Lunar: Reproducible builds: week 5 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes: Uploads that should help other packages:

Patches submitted for toolchain issues:

Some discussions have been started in Debian and with upstream:

Packages fixed: The following 8 packages became reproducible due to changes in their build dependencies: access-modifier-checker, apache-log4j2, jenkins-xstream, libsdl-perl, maven-shared-incremental, ruby-pygments.rb, ruby-wikicloth, uimaj.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

Discussions that have been started:

reproducible.debian.net: Holger Levsen added two new package sets: pkg-javascript-devel and pkg-php-pear. The lists of packages with and without notes are now sorted by age of the latest build. Mattia Rizzolo added support for email notifications so that maintainers can be warned when a package becomes unreproducible. Please ask Mattia or Holger or in the #debian-reproducible IRC channel if you want to be notified for your packages!

strip-nondeterminism development: Andrew Ayer fixed the gzip handler so that it skips adding a predetermined timestamp when there was none (a sketch of the idea follows at the end of this post).

Documentation update: Lunar added documentation about mtimes of files extracted using unzip being timezone dependent. He also wrote a short example on how to test reproducibility. Stephen Kitt updated the documentation about timestamps in PE binaries. Documentation and scripts to perform weekly reports were published by Lunar.

Package reviews: 50 obsolete reviews have been removed, 51 added and 29 updated this week. Thanks Chris West and Mathieu Bridon amongst others. Newly identified issues:

Misc.: Lunar will be talking (in French) about reproducible builds at Pas Sage en Seine on June 19th, at 15:00 in Paris. Meeting will happen this Wednesday, 19:00 UTC.
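The gzip fix mentioned above is easy to picture: RFC 1952 stores a member's modification time as a 4-byte little-endian MTIME field at offset 4 of the header, where 0 means no timestamp is available. A minimal Python sketch of that normalization follows (strip-nondeterminism itself is written in Perl; this is an illustration of the idea only):

#!/usr/bin/env python3
import struct
import sys

def normalize_gzip_mtime(path):
    # RFC 1952: bytes 0-1 are the magic, bytes 4-7 the little-endian MTIME;
    # an MTIME of 0 means "no timestamp available", so we leave it alone --
    # the behaviour the fix restored.
    with open(path, "r+b") as f:
        header = f.read(10)
        if header[:2] != b"\x1f\x8b":
            raise ValueError(f"{path}: not a gzip file")
        (mtime,) = struct.unpack("<I", header[4:8])
        if mtime == 0:
            return  # no timestamp present: nothing to strip
        f.seek(4)
        f.write(struct.pack("<I", 0))

if __name__ == "__main__":
    for name in sys.argv[1:]:
        normalize_gzip_mtime(name)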

23 October 2014

Matthew Garrett: Linux Container Security

First, read these slides. Done? Good.

(Edit: Just to clarify - these are not my slides. They're from a presentation Jerome Petazzoni gave at Linuxcon NA earlier this year)

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:
These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.


22 October 2014

Sylvain Le Gall: Release of OASIS 0.4.5

On behalf of Jacques-Pascal Deplaix I am happy to announce the release of OASIS v0.4.5. OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily. This tool is freely inspired by Cabal which is the same kind of tool for Haskell. You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website. Here is a quick summary of the important changes: Features: This new version is a small release to catch up with all the fixes/pull requests present in the VCS that have not yet been published. This should make the life of my dear contributors easier -- thanks again for being patient. I would like to thank again the contributors for this release: Christopher Zimmermann, Jerome Vouillon, Tomohiro Matsuyama and Christoph Höger. Their help is greatly appreciated.

30 March 2013

Evgeni Golov: Opera, standards and why I should have stayed in my cave

So you probably heard that I have that little new project of mine: QiFi, the pure JavaScript WiFi QR Code Generator. It's been running pretty well and people even seem to like it. One of its (unannounced) features is a pretty clean stylesheet that is used for printing. When you print, the result will be just the SSID and the QR code, so you can put that piece of paper everywhere you like. That works (I tested!) fine on Iceweasel/Firefox 10.0.12 and Chromium 25.0. Today I tried to do the same in Opera 12.14 and it failed terribly: the SSID was there, the QR code not. And here my journey begins. First I suspected the CSS I used was fishy, so I kicked all the CSS involved and retried: still no QR code in the print-out. So maybe it's the QR code library I use that produces a weird canvas? Nope, the examples on http://diveintohtml5.info/canvas.html and http://devfiles.myopera.com/articles/649/example5.html don't print either. Uhm, let's Google for "opera canvas print". And oh boy, I should not have done that. It seems it's a bug in Opera. And the proposed solution is to use canvas.toDataURL() to render the canvas as an image and load the image instead of the canvas. I almost went that way. But I felt the urgent need to read the docs first. So I opened http://www.w3.org/html/wg/drafts/html/master/embedded-content-0.html#dom-canvas-todataurl and https://developer.mozilla.org/en-US/docs/DOM/HTMLCanvasElement and started puking:
When trying to use types other than "image/png", authors can check if the image was really returned in the requested format by checking to see if the returned string starts with one of the exact strings "data:image/png," or "data:image/png;". If it does, the image is PNG, and thus the requested type was not supported. (The one exception to this is if the canvas has either no height or no width, in which case the result might simply be "data:,".)

If the type requested is not image/png, and the returned value starts with data:image/png, then the requested type is not supported.
Really? I have to check the returned STRING to know if there was an error? Go home HTML5, you're drunk! Okay, okay. No canvas rendered to images then. Let's just render the QR code as a <table> instead of a <canvas> when the browser looks like Opera. There is nothing one could do wrong with tables, right? But let's test with the basic example first: Yes, this is 2013. Yes, this is Opera 12.14. Yes, the rendering of a fucking HTML table is wrong. Needless to say, Iceweasel and Chromium render the example just fine. I bet even a recent Internet Explorer would. That said, there is no bugfix or workaround for Opera I want to implement. If you use Opera, I feel sorry for you. But that's all. Update: before someone cries "ZOMG! BUG PLZ!!!", I filed this as DSK-383716 at Opera.

31 December 2010

Debian News: New Debian Developers (December 2010)

The following developers got their Debian accounts in the last month: Congratulations!

The following developers have returned as Debian Developers after having retired at some time in the past:

Welcome back!

3 December 2010

Pietro Abate: distcheck vs edos-debcheck

This is the second post about distcheck. I want to give a quick overview of the differences between edos-debcheck and the new version. First, despite using the same SAT solver and encoding of the problem, distcheck has been re-written from scratch. Dose2 has several architectural problems and is not very well documented. Adding new features had become too difficult and error-prone, so a rewrite was a natural choice (at least for me). Hopefully Dose3 will survive the Mancoosi project and provide a base for dependency reasoning. The framework is well documented and the architecture pretty modular. It is written in OCaml, so, sadly, I don't expect many people to join the development team, but we'll be very open to it. These are the main differences with edos-debcheck.

Performances distcheck is about two times faster than edos-debcheck (from dose2), but it is a "bit" slower than debcheck (say, the original debcheck), that is, the tool written by Jerome Vouillon that was later superseded in Debian by edos-debcheck. The original debcheck was an all-in-one tool that did the parsing, encoding and solving without converting the problem to any intermediate format. distcheck trades a bit of speed for generality. Since it is based on Cudf, it can handle different formats and can be easily adapted to a range of situations just by changing the encoding of the original problem to cudf. Below are a couple of tests I've performed on my machine (debian unstable). The numbers speak for themselves.
$time cat tmp/squeeze.packages | edos-debcheck -failures > /dev/null
Completing conflicts... * 100.0%
Conflicts and dependencies... * 100.0%
Solving * 100.0%

real 0m19.515s
user 0m19.193s
sys 0m0.276s
$time ./distcheck.native -f deb://tmp/squeeze.packages > /dev/null

real 0m10.859s
user 0m10.669s
sys 0m0.172s

Input The second big difference is about input formats. In fact, at the moment, we have two different tools in Debian, one edos-debcheck and the other edos-rpmcheck. Despite using the same underlying library these two tools have different code bases. distcheck basically is a multiplexer that converts different inputs to a common format and then uses it (agnostically) to solve the installation problem. It can be called in different ways (via symlinks) to behave similarly to its predecessors. At the moment we are able to handle 5 different formats:
  1. deb:// the Packages 822 format for Debian-based distributions
  2. hdlist:// a binary format used by rpm-based distributions
  3. synth:// a simplified format to describe rpm-based package repositories
  4. eclipse:// an 822-based format that encodes OSGi plugin metadata
  5. cudf:// the native cudf format
distcheck handles gz and bz2 compressed files transparently. However, if you care about performance, you should decompress your input file first and then parse it with distcheck: it often takes more time to decompress the file on the fly than to run the installability test itself. There is also an experimental database backend that is not compiled by default at the moment.

Output Regarding the output, I've already explained the main differences in an old post. As a quick reminder, the old edos-debcheck had two output options. The first was a human-readable, unstructured output that was a handy source of information when running the tool interactively. The second was an XML-based format (without a DTD or a schema, I believe) that was used for batch processing. distcheck has only one output type, in the YAML format, that aims to be both human- and machine-readable. Hopefully this will cater for both needs. Moreover, just recently I've added to the output of distcheck a summary of who is breaking what. The output of edos-debcheck was basically a map of packages to the reasons of the breakage. In addition to this information, distcheck also gives a map between each reason (a missing dependency or a conflict) and the list of packages that are broken by that problem. This additional info is off by default, but I think it can be nice to know which missing dependency is responsible for the majority of problems in a distribution... For example, calling distcheck with --summary:
$./distcheck.native --summary deb://tests/sid.packages
background-packages: 29589
foreground-packages: 29589
broken-packages: 143
missing-packages: 138
conflict-packages: 5
unique-missing-packages: 52
unique-conflict-packages: 5
summary:
 -
  missing:
   missingdep: libevas-svn-05-engines-x (>= 0.9.9.063)
   packages:
    -
     package: enna-dbg
     version: 0.4.0-4
     architecture: amd64
     source: enna (= 0.4.0-4)
    -
     package: enna
     version: 0.4.0-4
     architecture: amd64
     source: enna (= 0.4.0-4)
 -
  missing:
   missingdep: libopenscenegraph56 (>= 2.8.1)
   packages:
    -
     package: libosgal1
     version: 0.6.1-2+b3
     architecture: amd64
     source: osgal (= 0.6.1-2)
    -
     package: libosgal-dev
     version: 0.6.1-2+b3
     architecture: amd64
     source: osgal (= 0.6.1-2)
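Since this is YAML, the summary is easy to post-process. For instance, here is a short Python sketch (using PyYAML; the field names are taken from the excerpt above, and the overall document layout is assumed from it) that ranks missing dependencies by how many packages they break:

#!/usr/bin/env python3
from collections import Counter
import sys

import yaml  # PyYAML

def main(path):
    with open(path) as f:
        doc = yaml.safe_load(f)
    counts = Counter()
    for entry in doc.get("summary", []):
        missing = entry.get("missing")
        if missing:
            counts[missing["missingdep"]] += len(missing["packages"])
    # print the worst offenders first
    for dep, broken in counts.most_common():
        print(f"{broken:5d}  {dep}")

if __name__ == "__main__":
    main(sys.argv[1])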
Below I give a small example of the edos-debcheck output compared to the new YAML-based output.
$cat tests/sid.packages | edos-debcheck -failures -explain
Completing conflicts... * 100.0%
Conflicts and dependencies... * 100.0%
Solving * 100.0%
zope-zms (= 1:2.11.1-03-1): FAILED
zope-zms (= 1:2.11.1-03-1) depends on missing:
- zope2.10
- zope2.9
zope-tinytableplus (= 0.9-19): FAILED
zope-tinytableplus (= 0.9-19) depends on missing:
- zope2.11
- zope2.10
- zope2.9
...
And an extract from the distcheck output (the order is different. I cut and pasted parts of the output here...)
$./distcheck.native -f -e deb://tests/sid.packages
report:
 -
  package: zope-zms
  version: 1:2.11.1-03-1
  architecture: all
  source: zope-zms (= 1:2.11.1-03-1)
  status: broken
  reasons:
   -
    missing:
     pkg:
      package: zope-zms
      version: 1:2.11.1-03-1
      architecture: all
      missingdep: zope2.9 | zope2.10
 -
  package: zope-tinytableplus
  version: 0.9-19
  architecture: all
  source: zope-tinytableplus (= 0.9-19)
  status: broken
  reasons:
   -
    missing:
     pkg:
      package: zope-tinytableplus
      version: 0.9-19
      architecture: all
      missingdep: zope2.9 | zope2.10 | zope2.11
...

Future The roadmap to release version 1.0 of distcheck is as follows:
  1. add background and foreground package selection. This feature will allow the user to specify a larger universe (background packages), but check only a subset of this universe (foreground packages). This should allow users to select packages using grep-dctrl and then pipe them to distcheck. At the moment we can select individual packages on the command line or we can use expressions like "bash (<= 2.7)" to check all versions of bash in the universe with version at most 2.7.
  2. code cleanup and a bit of refactoring between distcheck and buildcheck (that is, a frontend for distcheck that allows us to report broken build dependencies)
  3. consider essential packages while performing the installation test. Here there are a few things we have to understand, but the idea would be to detect possible problems related to the implicit presence of essential packages in the distribution. At the moment, distcheck performs the installation test in the empty universe, while ideally, the universe should contain all essential packages.
  4. finish the documentation. The effort is underway and we hope to finalize it shortly and release the Debian package in experimental.

27 April 2010

Matthew Garrett: Radeon reclocking

Alex released another set of Radeon power management patches over the weekend, and I've been adding my own code on top of that (Alex's patches go on top of drm-next, mine go on top of there). I've left it stress-testing for a couple of hours without it falling over, which tells me that it's stable enough that I can feel smug. This is a pleasing counterpoint to the previous experiences I've been having, which have been rife with a heady mixture of chip lockups or kernel deadlocks. It turns out that gpus are hard.

There's a few things you need to know about gpus. The first is that if they're discrete devices they typically have their own memory controller and video memory. The second is that there's an impressive number of ways that you can end up touching that memory. The third is that they tend to get upset if something tries to touch that memory and the memory controller is in the middle of reclocking at the time.

The first and most obvious use of video memory is by the gpu itself. Accelerated operations on radeon are carried out by sending a command packet to the command processor. This is achieved by sharing a ring buffer between the cpu and the gpu, with the gpu reading packets out of that ring buffer and performing the operations contained within them. Many of these operations will touch video memory (that being the nature of most things you want a gpu to do), and if that happens bad things occur. Like the card locking up and potentially taking your PCI bus with it.

So, obviously, we don't want that to happen. The first thing we do is take a mutex that blocks any further accelerated operations from being submitted by userspace. Then we wait until we get an interrupt from the gpu telling us that the graphics engine has gone idle. The problem here is that we don't have a terribly good idea of how many more operations there are to complete and we don't know how long each of those operations is going to take, but this is less bad than some of the alternatives[1]. Jerome Glisse has some ideas on how to improve this to require less waiting, but the effects should still be pretty much invisible to the average user.

So we've stopped the command processor touching ram. Everything's good, right?

Well, not really. The obvious problem is that users typically want to display something, so there's a separate chunk of chip that's repeatedly copying video memory over to your monitor. That's got to go too. Thankfully, there's a convenient bit in the crtc registers that lets us turn that off, but the pretty unsurprising downside is that your screen goes blank while that's happening. So we don't want to do that. Instead, we try to perform the reclock while there's nothing being displayed on the screen - that is, while we're in the screen region where a crt's electron gun would be scanning back from the bottom of the screen to the top. It turns out that rather a lot of display assumptions depend on this happening even if there's no crt, no electron gun and no thick sheet of glass with a decent approximation of vacuum behind it, so we get to do this even if we're displaying to an LVDS. And we have about 400-500 microseconds to do it - an almost generous amount of time.

So we ask the hardware to generate an interrupt when we enter vblank and then we reclock. Except the hardware has an irritating habit of lying - sometimes we get the interrupt a line or two before vblank, sometimes we get it after we've already gone out the other side. Vexing, and not entirely solved yet - so sometimes you'll still get a single blank frame during reclock. But there are plans, and they'll probably even work.

At this point the acceleration hardware isn't touching the memory and the scanout hardware isn't touching the memory. Except it still crashes under some workloads. This one took me longer to track down, but the answer turned out to be pretty straightforward. Not all operations are accelerated. When they're not accelerated they have to be done in software. That means that the CPU has to write to the video memory itself. I'm sure you can see where this is going. This was fixed without too much trouble once I'd finished picking through the driver to work out every location where objects might be mapped into the CPU's address space, at which point it's a simple matter of unmapping them and blocking the fault handler from remapping them until the reclock is finished. Linux, thankfully, has lots of synchronisation primitives. And now everything works.

Except when it doesn't. This took a final period of head scratching, followed by the discovery that ttm (the memory allocator used by radeon) has a background thread that would occasionally fire to clean up objects. And by clean up objects, I mean change their mapping - which means updating their status in the gart, which means touching video memory. So, let's block that. And that tipped me off to the fact that even if it couldn't submit new commands, the CPU could still create or destroy objects - with the same consequences.

So, once all of these are blocked, video memory is quiescent and we can do what we want. And we do, at least once I'd sorted out the bits where I was taking locks in the wrong order and deadlocking. Depending on the powerplay tables available on your card we'll choose different rates and so your power savings will vary heavily depending on the values that your vendor provided, but the card I'm testing on sees a handy 30W drop at idle. Right now we're only changing clocks and not dropping voltage so there's potentially a little more to come.

While getting this stable was pretty miserable, the documented entry points for clock changing made a lot of this easier than it would otherwise have been. It's also probably worth noting that Intel's clock configuration registers are entirely missing from any public documentation and the driver Intel submitted to make them work in their latest chips appeared to have been deliberately obfuscated, so thanks to AMD for making all of this possible.

[1] It's possible to insert various command packets that either indicate when they've passed or stall until a register value gets updated, but these either cause awkward problems with the cp locking or mean that the gui idle functionality never goes idle, so they're not ideal either.

31 December 2009

John Goerzen: My Reading List for 2010

I can hear the question now: What kind of guy puts The Iliad and War and Peace on a list of things to read for fun? Well, me. I think that reading things by authors I've never read before, people that take positions I haven't heard of before or don't agree with, or works that are challenging, will teach me something. And learning is fun. My entire list for 2010 is at Goodreads. I've highlighted a few below. I don't expect to read all 34 books on the Goodreads list necessarily, but there is the chance. The Iliad by Homer, 750BC, trans. by Alexander Pope, 704 pages. A recent NPR story kindled my interest in this work. I'm looking forward to it. The Oxford History of the Classical World by Boardman, Griffin, and Murray, 1986, 882 pages. It covers ancient Greece and Rome up through the fall of the Roman empire. The Fires of Heaven (Wheel of Time #5) by Robert Jordan, 1994, 912 pages. I've read books 1 through 4 already, and would like to continue on the series. War and Peace by Lev "Leo" Nikolayevich Tolstoy, 1869, 1392 pages. Been on my list for way too long. Time to get to it. Haven't read anything by Tolstoy before. The Politics of Jesus by John Howard Yoder, 1972, 2nd ed., 270 pages. Aims to dispel the notion of Jesus as apolitical. An Intimate History of Humanity by Theodore Zeldin, 1996, 496 pages. Picked this up at Powell's in Portland on a whim, and it's about time I get to it. The Myth of a Christian Nation: How the Quest for Political Power Is Destroying the Church by Gregory A. Boyd, 2007, 224 pages. An argument that the American evangelical church allowed itself to be co-opted by the political right (and some on the left) and argues this is harmful to the church. Also challenges the notion that America ever was a Christian nation. Daily Life in Ancient Rome: The People and the City at the Height of the Empire, by Jerome Carcopino, 2003, 368 pages. I've always been fascinated with how things were on the ground rather than at the perspective of generals and kings, and this promises to be interesting. Slavery, Sabbath, War, and Women: Case Issues in Biblical Interpretation (Conrad Grebel Lectures) by Willard M. Swartley, 1983, 368 pages. Looking at how people have argued from different Biblical perspectives about various issues over the years. To the Lighthouse by Virginia Woolf, 1927, 252 pages. I can't believe I've never read Woolf before. Yet another one I'm really looking forward to. Tales of the Jazz Age by F. Scott Fitzgerald, 1922, 319 pages. Per Goodreads: "This book of five confessional essays from the 1930s follows Fitzgerald and his wife Zelda from the height of their celebrity as the darlings of the 1920s to years of rapid decline leading to the self-proclaimed 'Crack Up' in 1936." Ulysses by James Joyce, 1922 (1961 unabridged version), 783 pages. The Future of Faith by Harvey Cox, 2009, 256 pages. Per Goodreads, Cox "explains why Christian beliefs and dogma are giving way to new grassroots movements rooted in social justice and spiritual experience." Heard about this one in an interview with Diane Rehm. Being There by Jerzy Kosiński, 1970, 128 pages. Jesus: Uncovering the Life, Teachings, and Relevance of a Religious Revolutionary by Marcus Borg, 2006, 352 pages. Whether or not you agree with Borg, this has got to be a thought-provoking title. The Three Musketeers by Alexandre Dumas, 1844, 640 pages. The Book of Tea by Kakuzo Okakura, 1906, 154 pages.
Per Goodreads: "In 1906 in turn-of-the-century Boston, a small, esoteric book about tea was written with the intention of being read aloud in the famous salon of Isabella Gardner. It was authored by Okakura Kakuzo, a Japanese philosopher, art expert, and curator. Little known at the time, Kakuzo would emerge as one of the great thinkers of the early 20th century, a genius who was insightful, witty and greatly responsible for bridging Western and Eastern cultures. Nearly a century later, Kakuzo's The Book of Tea is still beloved the world over. Interwoven with a rich history of tea and its place in Japanese society is poignant commentary on Eastern culture and our ongoing fascination with it, as well as illuminating essays on art, spirituality, poetry, and more." More of my list is at Goodreads.

17 October 2009

Russell Coker: Links October 2009

Garik Israelian gave an interesting TED talk about spectrography of stars and SETI [1]. He assumes that tectonic activity is a pre-requisite for the evolution of life (when discussing the search for elements that are needed for life) and that life which is based on solar energy will have a similar spectrographic signature to the chlorophyll-based plants that we are familiar with. I doubt both those assumptions, but I still found the talk very interesting and I learned a lot. Julian Dibbell wrote an interesting Wired article about an ongoing battle between the Cult of Scientology and 4chan [2]. I don't often barrack for 4chan, but they seem to be doing some good things here, though of course they do it in their own unique manner. The article also links to a hilarious video of Tom Cruise being insane, among other things he claims that Scientologists are the only ones who can help at an accident site. Has Tom Cruise ever provided assistance at a car crash? The Independent has an article by Robert Fisk about the impending shift away from the US dollar for the oil trade [3]. This is expected to cause a significant loss in the value of the US dollar. Robin Marantz Henig wrote an interesting article for the NY Times about the causes of anxiety [4]. It focusses on Jerome Kagan's longitudinal studies of babies and young people. One thing that I found particularly interesting were the research issues of recognising the difference between brain states, internal emotional state, and the symptoms of emotions that people display (including their own description of their emotions, which may be misleading or false). The tests on teenage social interactions that involved fake myspace pages and an MRI were also interesting. Juan Cole wrote an insightful Salon article titled "The top ten things you didn't know about Iran" [5]. The Iranian government doesn't seem to be a threat to anyone outside their country. Clay Shirky wrote an insightful post about TV being a heat-sink for excess spare time and considers how many projects of the scale of Wikipedia could be created with a small portion of that time [6]. It seems that the trend in society is to spend less time watching TV and more time doing creative things. In a related note Dan Meyer has an interesting blog post about trolls who say "You Have No Life" [7]. The Making Light blog post about Barack Obama's Nobel Peace Prize has some insightful comments [8]. I doubted that he had achieved enough to deserve it, but the commentators provide evidence that he has achieved a lot. I wonder if he will receive a second Peace Prize sometime in the next 10 years. The Making Light blog post about bullies and online disputes predictably got diverted to discussing school bullying [9]. The comments are interesting. Making Light has a mind-boggling post about homosexuality and porn [10]. US Senator Tom Coburn's (R-OK) chief of staff Michael Schwartz made the case against pornography. "All pornography is homosexual pornography", said Schwartz, quoting an ex-gay friend of his. Among other things there are many good puns in the comments. Cubicle Jungle is an amusing satire of the computer industry [11]. It's shocking how long it goes before it gets to the part that's obviously fiction. The WikiReader is an interesting new device [12]. It costs $99US and has a copy of Wikipedia on an SD card; they have a subscription service that involves posting you a new SD card every 6 months, or you can download an image from their server.
They state that they have a filtered version of Wikipedia for children; I wonder how they manage that, and I also wonder whether they have an unfiltered version for adults. The device runs on two AAA batteries and is designed to be cheap and easy. Naturally it doesn't support editing, but most of the time that you need Wikipedia you don't need edit access or access to content that is less than 6 months old. Exetel are scum: customers who complain are cut off [13]. Making Light has an interesting post about a New Age scumbag who killed at least two of the victims who paid $10,000 for a sweat-lodge workshop (others are in hospital and may die in the near future) [14]. The NY Times has an interesting article about Jamie Oliver and his latest project [15]. He is trying to reform the eating habits of the unhealthiest area in the US. The 15 pound burger sounds interesting though, I wouldn't mind sharing one of those with 14 friends.

8 February 2009

Peter Van Eynde: @ fosdem but also not really

This year I managed to go to fosdem every day, even at the beer event. Not that I attended many talks: I was quite busy getting the network to work. We got wireless in almost all locations in the end. Setting up and fixing the problems took most of Saturday. On Sunday we added the final 'experimental' room via a wireless bridge link across the square, with the beam over the heads of the people in the queue for Belgian fries.

In the end it all worked and we had only a few configuration problems and many cable problems. I must say it was more fun to 'work' at fosdem than to just be there. Many thanks to Jerome Paquay, who actually arranged to lend the equipment from our employer (Cisco) and to configure it. Thanks to AY for ... well, being AY.

Next year n-mode? serious uplinks?


18 November 2008

Matthew Garrett: Aggressive graphics power management

My current desktop PC has an RS790-based Radeon on-board graphics controller. It also has a Radeon X1900 plugged in. Playing with my Watts Up, I found that the system was (at idle!) drawing around 35W more power with the X1900 than with the on-board graphics.

This is clearly less than ideal.

Recent Radeons all support dynamic clock gating, a technology where the clocks to various bits of the chip are turned off when not in use. Unfortunately it seems that this is generally already enabled by the BIOS on most hardware, so playing with that didn't give me any power savings. Next I looked at Powerplay, the AMD technology for reducing clocks and voltages. It turns out that my desktop hardware doesn't provide any Powerplay tables, so no joy there either. What next?

Radeons all carry a ROM containing a bunch of tables and scripts written in a straightforward bytecode language called Atom. The idea is that OS-specific drivers can call the Atom tables to perform tasks that are hardware dependent, even without knowledge of the specific low-level nature of the hardware they're driving. You can use Atom to do several things, from card initialisation through mode setting to (crucially) setting the clock frequencies. Jerome Glisse wrote a small utility called Atomtools that lets you execute Atom scripts and set the core and RAM frequencies. Playing with this showed that it was possible to save the best part of 5W by underclocking the graphics core, and about the same again by reducing the memory clock. A total saving of 9-10W was pretty significant.

The main problem with reducing the memory clock was that doing it while the screen is being scanned out results in memory corruption, showing up as big ugly graphical artifacts on the screen. I'm a fan of doing power management as aggressively as possible, which means reclocking the memory whenever the system is idle. Turning the screen off to reclock the memory would avoid the graphical corruption but introduce irritating flicker, so that wasn't really an option. The next plan was to synchronise the memory reclocking to the vertical refresh interval, the period of time between the bottom of a frame and the top of the next frame being drawn. Unfortunately setting the memory frequency took somewhere between 2 and 20 milliseconds, far too long to finish inside that time period.

So. Just using Atom was clearly not going to be possible. The next step was to try writing the registers directly. Looking at the R500 register documentation showed that the MPLL_FUNC_CNTL register contained the PLL dividers for the memory clock. Simply smacking a new value in here would allow changing the frequency of the memory clock with a single register write. It even worked. Almost. I could change the frequency within small ranges, but going any further resulted in increasingly severe graphical corruption. Unlike the sort I got with the Atom approach to changing the frequency, this corruption manifested itself as a range of effects from shimmering on the screen down to blocks of image gradually disappearing in an impressively trippy (though somewhat disturbing) way.

Next step was to perform a register dump before and after changing the frequencies via Atom, and compare them to the registers I was programming. MC_ARB_RATIO_CLK_SEQ was consistently different, which is where things got interesting. The AMD docs helpfully describe this register as "Magic field, please use the excel programming guide. Sets the hclk/sclk ratio in the arbiter", about as helpful as being told that the register contents are defined by careful examination of a series of butterflies kept somewhere in Taiwan. Now what?

Back to Atomtools. Enabling debugging let me watch a dump of the Atom script as it ran. The relevant part of the dump is here. The most significant point was:
MOVE_REG @ 0xBC09
src: ID[0x0000+B39E].[31:0] -> 0xFF7FFF7F
dst: REG[0xFE16].[31:0] <- 0xFF7FFF7F
, showing that the value in question was being read out of a table in the video BIOS (ID[0x0000+B39E] indicating the base of the ROM plus 0xB39E). Looking further back showed that WS[0x40] contained a number that was used as an index into the table. Grepping the header files gave 0x40 as ATOM_WS_QUOTIENT, containing the quotient of a division operation immediately beforehand. Working back from there showed that the value was derived from a formula involving the divider frequencies of the memory PLL and the source PLL. Reimplementing that was trivial, and now I could program the same register values. Hurrah!

It didn't work, of course. These things never do. It looked like modifying this value didn't actually do anything unless the memory controller was reinitialised. Looking through the Atom dump showed that this was achieved by calling the MemoryDeviceInit script. Reimplementing this from scratch was one option, but it had a bunch of branches and frankly I'm lazy and that's why I work on this Linux stuff rather than getting a proper job. This particular script was fast, so there was no real reason to do it by hand instead of just using the interpreter. Timing showed that doing so could easily be done within the vblank interval. This time, it even worked.

I've done a proof of concept that involved wedging this into the Radeon DRM code with extreme prejudice, but it needs some rework. However, it demonstrates that it's possible to downclock the memory whenever the screen is idle without there being any observable screen flicker. Combine that with GPU downclocking and we can save about 10W without any noticeable degradation in performance or output. Victory!

I gave the code to someone with an X1300 and it promptly corrupted their screen and locked their machine up. Oh well. Turns out that they have a different memory controller or some such madness.

So, obviously, there's more work to be done on this. I've put some test code here. It's a small program that should be run as root. It should reprogram an Atom-based discrete graphics card[1] to half its memory clock. Running it again will halve it again. I don't recommend doing that. You'll need to reboot to get the full clock back. This isn't vblank synced, so it may introduce some graphical corruption. If the corruption is static (ie, isn't moving or flickering) then that's fine. If it's moving then I (and/or the docs) suck and there's still work to be done. If your machine hangs then I'm interested in knowing what hardware you have and may have some further debugging code to be run. Unless you have an X1300, in which case it's known to break and what were you thinking running this code you crazy mad fool.

Once this is stable it shouldn't take long to integrate it into the DRM and X layers. I'm also trying to get hold of some mobile AMD hardware to test what impact we can have on laptops.

[1] Shockingly enough, it's somewhat harder to underclock graphics memory on a shared memory system

23 September 2008

Zak B. Elep: Broken Comment Posting Fixed; New #olpc-ph Grassroots Group

Thanks to Wolfger for pointing out an HTTP 500 while trying to greet me by comment. :D I totally forgot about upgrading Net::Akismet to the new version that comes with my patch, as described previously. All the while I was thinking Net::Akismet was doing too good of a job keeping out comment spam, when it may as well have kept out all comments :( But it's my fault, really. Among other things, Jerome has finally started a new grassroots group for the OLPC effort here in the Philippines, fresh from the success at the recently-concluded SFD. There's now an IRC channel up too, with logs available locally (text-only, until I can set up a web and fileserver frontend.) I'm personally interested in running a port of Inferno and perhaps helping out on the TODOs for it.

5 September 2008

Matthew Garrett: Power management and graphics

It's the X Development Summit in Edinburgh this week, so I've been hanging out with the graphics crowd. There hasn't been a huge amount of work done in the field of power management in graphics so far - Intel have framebuffer compression and there's the lvds reclocking patch I wrote (I've cleaned this up somewhat since then, but it probably wants some refactoring to avoid increasing CPU usage based on its use of damage for update notifications). That still leaves us with some fun things to look at, though.

The most obvious issue is the gpu clock. Intel's chipset implements clock gating, where unused sections of chip automatically unclock themselves. This is pleasingly transparent to the OS, and we get power savings without any complexity we have to care about. However, there's no way to control the core clock of the GPU - it's part of the northbridge and fiddling with the clocking of that would be likely to upset things. Nvidia and Radeon hardware is more interesting in this respect, since we can control the gpu clock independently of anything else. The problem is trying to do so in a reasonable way.

In an ideal universe, we can change these clock frequencies quickly and without any visual artifacts. That way it's possible to leave it in the lowest state and clock it up as load develops. There's a couple of problems with this - non ideal hardware, and the software in the first place. Jerome's been testing a little on Radeon and discovered that changing the memory clock through Atom results in visual corruption. It's conceivable that this is due to some memory transaction cycles getting corrupted as the clock gets changed. If we could ensure that the reclock happens during the vertical blank interval, that's something that could potentially be avoided (of course, then we have the entertainment of working out when the vertical blank interval actually is when you have a dual head output...). The other problem is that 3D software tends to consume as many resources as are available. Games will produce as many frames per second as possible. Demand-based clocking will simply ramp the gpu to full speed in that situation, which isn't necessarily what you want in the battery case (as the number of frames per second goes up, so does the cpu usage - even more power draw) but is probably pretty reasonable in the powered case.

Handwavy testing suggests that this can save a pretty significant amount of power, so it's something that I'm planning on working on. Further optimisations include things like making sure that we're not running any PLLs that aren't being used at the time (oscillators chew power), not powering up output ports when you're not outputting to them and enabling any hardware-level features that we're currently ignoring. And, ideally, doing all of this without causing the machine to hang on a regular basis.

27 March 2008

Stefano Zacchiroli: galax 1.1

Galax 1.1: released ... and in Debian/unstable! After a long release cycle, a couple of days ago Galax 1.1 (the first release of the 1.x series) was finally released. Galax is a free (as in freedom) implementation of XQuery, the XML Query language. Galax is Schema-aware, statically typed, and really fast. The release enjoyed a good synergy between the Debian packagers (erm ... /me) and the upstream authors, who were happy to exchange patches and let me test drive the release before making it public, to check whether everything was fine for Debian or not. Thank you Jerome and Mary! Thanks to these efforts Galax 1.1 is now available in Debian/unstable. Enjoy your XQueries!

12 September 2007

Brice Goglin: Debian X.org notes - Got specs?

So, this time it was for real! AMD really meant it when they announced the release of specifications for ATI boards over the last weeks. During XDS2007 at Cambridge, we received the actual specs: 2 PDF files, 460 pages for the M56 (Mobility X1600, an r5xx board) and 436 pages for rv630. The files are now online, but I am not going to give the link here since the web server already suffered too much today (and the URL is easy to find anyway). People should NOT understand this as if we were going to get an open-source driver for all ATI boards in the next weeks. First, these are only 2D specifications for r500 and r600. 3D specs are expected in a couple of weeks once AMD fixes the remaining issues. r300 specs might arrive later. Also, these specifications are pretty hard to use, as expected. There are something like 9000 registers in r6xx boards, and these PDF files are just descriptions of those registers. It's of course far away from being a "How to write a driver?". So if you want to help, there will be lots of things to do. We also got a demo from some Suse guys of the driver they are working on. They got these specs about 2 months ago. And they already have some part of modesetting working. It means you might get your X server to start. There are about one hundred different chipsets with their own quirks, so still lots of work to do. And then lots of performance things, 3D, ... to do. This driver is called "Radeon HD" so far. It will remain separate for now, but it might end up being merged with the upstream ATI driver later since the 3D engines are very similar (I personally hope it will happen). The reverse-engineered Avivo driver is dead now, then. Fortunately, Jerome Glisse has been aware of the event for a couple of months now, so he didn't waste too much time working on it. xserver-xorg-video-avivo will remain in Debian experimental until the new driver arrives. The good thing about Avivo is that it (and the ongoing Nouveau driver for Nvidia) led to the development of very nice reverse-engineering tools (revenge, renouveau, mmio trace, ...). They will be very useful for other graphic boards, or even some network hardware. Of course, many thanks to AMD for making this happen!

27 July 2007

Evan Prodromou: 7 Thermidor CCXV

We've had a lot of great blog traffic about Vinismo. I thought I'd try to pull together a few of the better ones. All in all it's been pretty good. I hope we'll get some more, though!

DemoCampMontreal3 report So, it's been a couple of days and I should probably get around to posting my own DemoCampMontreal3 report.
  • Niko and I started off with our own demo for Vinismo. It was a lot of fun: we talked about the reasons for starting the site, the technical, information architecture and graphics/UI design challenges, and what our future extensions are going to be. At the end of it, we took some questions, which was fun. The most interesting for me was from Roberto Rocha, whose TechnoCité is one of my favourite tech columns in Montreal. He asked, "Your typical contributor will be much older. What will you do to make your wiki more accessible to them?" It was a good question I don't have an answer to yet, but I want to think about it more.
  • The second demo was by Heri Rakotomalala, who showed off his social-networking GTD tool, WorkCruncher. It's a TODO list with a twist: items that you don't get done age off the list. You have to re-commit to doing a task on an almost daily basis. I think it's a great and refreshing design; my TODO list gets depressing long and filled with unfinishable tasks, and I get too intimidated to work on ones that really matter. I think Heri might have to make some concessions to people's expectations for TODO lists -- maybe a way to automatically archive tasks, rather than deleting them entirely...?
  • The third demo was by the gang from Defensio, who are providing a great anti-spam Web service similar to Akismet. They had a few examples of where they're different, but I'm not well-versed enough in comment spam issues to understand them. My guess is that since they're getting into the market after Akismet, though, they have the opportunity to make a smarter technology. Their one downside? They used slides -- which is against the rules of DemoCamp. They did demo the service, though.
  • Fourth up was the indefatigable Simon Law. Simon's project? To turn back time. Talk about ambitious! His effort consisted of making a typical kitchen clock turn backwards. He disassembled the clock and explained how it worked to the audience. It was great, except for two things: the clock didn't work at the end of the demo (although he got it working by the end of DemoCamp), and he took a few minutes to draw a diagram of the clock; in my mind, that's just a low-tech PowerPoint slide.
  • Fifth, and quite fascinating, was a tool that Jerome Paradis showed off. It was a Google-Maps mashup that filtered special emails for an informal private jet sharing network. Apparently, companies who charter private jets often have space in the jets, so they'll make that space available. People who need a last-minute charter jet can send email from their Blackberries and such, and if there's availability they get contacted by the charter companies. The interesting part? These people use a highly structured lingo ("O/W" = "one way"), and Jerome's tool scrapes these emails to make the data into a mapping app for his customers (a toy sketch of the idea follows this list). Very interesting!
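A toy Python sketch of that kind of scraping is below; the message format is entirely made up for illustration, since beyond "O/W" the real lingo is anyone's guess:

#!/usr/bin/env python3
# Hypothetical example line: "O/W TEB-MIA 10/21 6 seats"
import re

LINE = re.compile(
    r"(?P<trip>O/W|R/T)\s+"                         # one way / round trip
    r"(?P<origin>[A-Z]{3})-(?P<dest>[A-Z]{3})\s+"   # airport codes
    r"(?P<date>\d{1,2}/\d{1,2})\s+"                 # month/day
    r"(?P<seats>\d+)\s+seats?", re.IGNORECASE)

def scrape(body):
    # Yield one availability record per matching line, ready for mapping.
    for line in body.splitlines():
        m = LINE.search(line)
        if m:
            rec = m.groupdict()
            rec["trip"] = "one way" if rec["trip"].upper() == "O/W" else "round trip"
            yield rec

if __name__ == "__main__":
    sample = "O/W TEB-MIA 10/21 6 seats\nR/T LAS-SFO 11/02 4 seats"
    for record in scrape(sample):
        print(record)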
All in all, it was a good night -- probably made better by the Argentina Cabernet Sauvignon I had. There's a DemoCampMontreal4 scheduled for August 17th, but I won't be in town for it. Too bad for me!

16 July 2007

Zak B. Elep: First Day Morphs

I seriously lack originality on that title above. Nevertheless, that gives some clues on what I’m up to: I rode my first airplane flight ever to Cebu with Jerome last Sunday to meet up with the rest of the pioneering team of Winston’s new startup, Morph Labs. Since it was a Sunday, there wasn’t much talk about work, but we did get to play with some of the new stuff we will be using, like Macbooks, iPhones, Airports, and a Mac Mini. Hint-hint indeed :P Thus began the first day, on a Monday. We got up to speed (from a 3-to-7 AM hibernate), configuring the rest of the Macbooks, getting the HSDPA modems to work, laying out an ad-hoc WiFi net for the moment, and starting to plan out the directions the company will take. So far, so good. More to come, on Second Life Morphs. (I seriously need to express myself better.)

15 May 2007

Benjamin Mako Hill: Ubuntu Community Council

Very quietly, the Ubuntu community reached a major milestone today when we held a Community Council meeting, like it does fortnightly. The only thing different was that the council included five new members -- Mike Basinger, Corey Burger, Matthew East, Jerome S. Gotangco and Daniel Holbach. These members are, with the exception of Holbach, not employed by Canonical and were each confirmed by a vote of the full Ubuntu membership. Before the recent elections, I was the only member who was not a Canonical employee -- and I used to be one. From a technical perspective, the founding Ubuntu team was able to benefit from everything that Debian had built -- a running start if there ever was one. From a community perspective though, we had to start from scratch and had to deal with the very difficult situation of paid labor and closely entangled corporate interests. Working with the rest of the team, I drafted a set of community norms (the Code of Conduct) and governance structures designed to keep both the community and Canonical under control. They seemed like good ideas but, because we didn't have a community yet, only reflected the sensibilities of Mark Shuttleworth, myself, and the rest of the early Ubuntu team. The highest Ubuntu governance board, the Community Council, was initially filled with people that were in the room in Oxford when we came up with the idea: myself, Mark, James Troup, and Colin Watson. We decided that the council members should, and would, be approved by a vote of the membership. With no members though, we faced a bit of a bootstrapping problem. Three years later, Ubuntu has a vibrant community with hundreds of enfranchised members who have an up-or-down say on the members of the council itself. When we looked for new potential council members to propose to the community, we tried to pick the most active, most level-headed, and most representative group we could find. It was pleasing to see that only one member of the new CC board works for Canonical; Canonical employees are now outnumbered. It has been interesting to see announcements by Fedora, FreeSpire, and OpenSuSE over the last few years proposing systems of more inclusive community governance structures that, perhaps not entirely coincidentally, look a bit like what Ubuntu has built in its attempts to empower users in that sometimes awkward community/company environment. Whatever the reasons, I think it means there's more pressure on us at Ubuntu to keep raising the bar. I see today as a great example of how we've done just that.

16 November 2006

Zak B. Elep: Ubuntu-PH Release Party for 6.10 (Edgy Eft)

Last night I called Ubunteros nearby Manila for the Edgy Eft (belated) release party at the Coffee Bean and Tea Leaf at Greenbelt 3. Little did I know that there would be a lot of folks coming from the just-concluded FOSS@work workshop joining in the fun, thanks to Yolynne Medina and Eric Pareja. Diane Gonzales and I got to the venue first, then followed by the FOSS@Work folks. Dominique Cimafranca, Migs Paraz, Ranulf Goss, Jopes Gallardo, and Joel Bryan Juliano were there too, and all in all we were easily the noisiest group in the coffee shop, seemingly occupying the entirety of the place. I originally planned to move the group to have dinner somewhere, but along the way everybody seemed to forget dinner and we got quite engaged in talking to everyone else. It was terrific. The 2 boxes of Edgy Ubuntu, Kubuntu, and Edubuntu CDs I brought were easily given away to everyone; we even had them exchanged and autographed (naks!) reminiscent of what Ealden and I did last February when Mark came here. As a finale, we had a group photo of everyone with their CDs; Dominique remarks that in his 'informal' study, more and more women prefer Ubuntu (and I sure do think he’ll be blogging more about this soon. ;) Needless to say, the above photo doesn’t do great justice to what happened last night; it came from my elric which I didn’t get to use much as a camera since I too was happily chatting away. That said, I expect RJ Ian will be posting his photos from his brand-spanking-new Kodak camera to the Ubuntu-PH site once he gets back to Mindanao with Yolynne and company. I also think the FOSS@Work folks have their own photosite or wiki to post more photos, which we’ll be seeing sooner. Jerome Gotangco and Ealden Escañan, the guys whom we all owe Ubuntu-PH to, were unfortunately unable to attend last night, as Jerome was off to Cebu to participate in the ICT congress there, while Ealden was quite busy at work. Hopefully they (as well as last night’s attendees!) can attend the next Release Party for 7.04 (aka Feisty Fawn), and hopefully it will be just as fun, and be more meaningful if more Ubuntu-PH folks get involved in its development! Update: Yolynne and RJ just posted pics fresh from their arrival home. Expect more pics later, nicely tagged too…
